Results 1 - 20 of 105
1.
Journal of Computational and Graphical Statistics ; 32(2):483-500, 2023.
Article in English | ProQuest Central | ID: covidwho-20241312

ABSTRACT

In this article, a multivariate count distribution with Conway-Maxwell (COM)-Poisson marginals is proposed. To do this, we develop a modification of the Sarmanov method for constructing multivariate distributions. Our multivariate COM-Poisson (MultCOMP) model has desirable features: (i) it admits a flexible covariance matrix allowing for both negative and positive nondiagonal entries; (ii) it overcomes the limitation of the existing bivariate COM-Poisson distributions in the literature, which do not have COM-Poisson marginals; (iii) it allows for the analysis of multivariate counts and is not limited to bivariate counts. The likelihood specification presents inferential challenges, as it depends on a number of intractable normalizing constants involving the model parameters. These obstacles motivate us to propose Bayesian inferential approaches in which the resulting doubly intractable posterior is handled via the noisy exchange algorithm or the Grouped Independence Metropolis–Hastings algorithm. Numerical experiments based on simulations are presented to illustrate the proposed Bayesian approach. We demonstrate the potential of the MultCOMP model through a real data application on the numbers of goals scored by the home and away teams in the English Premier League from 2018 to 2021. Here, our interest is to assess the effect of a lack of crowds during the COVID-19 pandemic on the well-known home team advantage. A MultCOMP model fit shows evidence of a decreased number of goals scored by the home team, not accompanied by a reduced score from the opponent. Hence, our analysis suggests a smaller home team advantage in the absence of crowds, which agrees with the opinion of several football experts. Supplementary materials for this article are available online.
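The intractable normalizing constant is the root of the computational difficulty described above. As a minimal sketch (not the authors' MultCOMP code), the snippet below evaluates a univariate COM-Poisson log-pmf by truncating the series Z(lambda, nu) = sum_j lambda^j / (j!)^nu; the truncation limit and parameter values are assumptions of this toy example.

```python
# Univariate COM-Poisson log-pmf via a truncated normalizing constant.
import numpy as np
from scipy.special import gammaln

def com_poisson_logpmf(x, lam, nu, truncation=200):
    j = np.arange(truncation)
    log_terms = j * np.log(lam) - nu * gammaln(j + 1)   # log of lam^j / (j!)^nu
    log_Z = np.logaddexp.reduce(log_terms)              # log normalizing constant Z(lam, nu)
    return x * np.log(lam) - nu * gammaln(x + 1) - log_Z

# nu < 1 gives overdispersion, nu > 1 underdispersion, relative to the Poisson.
probs = np.exp([com_poisson_logpmf(x, lam=2.0, nu=0.7) for x in range(10)])
print(probs.round(4), probs.sum())
```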

2.
Journal of Business & Economic Statistics ; 41(3):667-682, 2023.
Article in English | ProQuest Central | ID: covidwho-20233902

ABSTRACT

We provide a methodology that efficiently combines statistical nowcasting models with survey information to improve the (density) nowcasting of U.S. real GDP. Specifically, we use a conventional dynamic factor model with stochastic volatility components as the baseline statistical model. We augment the model with information from survey expectations by aligning the first and second moments of the predictive distribution implied by this baseline model with those extracted from the survey information at various horizons. Results indicate that the survey information adds value beyond the baseline model for nowcasting GDP. While the mean survey predictions deliver valuable information during extreme events such as the Covid-19 pandemic, the variation in the survey participants' predictions, often used as a measure of "ambiguity," conveys crucial information beyond the mean of those predictions for capturing the tail behavior of the GDP distribution.
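As a rough, hypothetical illustration of the moment-alignment idea only (the paper's actual procedure tilts the full predictive density of a dynamic factor model), the snippet below re-centers and re-scales draws from a baseline predictive distribution so that their first two moments match an assumed survey mean and survey-disagreement spread; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
baseline_draws = rng.normal(loc=1.8, scale=2.5, size=10_000)  # baseline nowcast draws (% GDP growth)

survey_mean = 0.5          # mean of survey point forecasts (assumed)
survey_disagreement = 3.0  # cross-respondent std. dev., a proxy for "ambiguity" (assumed)

# Standardize the baseline draws, then impose the survey-implied mean and spread.
z = (baseline_draws - baseline_draws.mean()) / baseline_draws.std()
aligned_draws = survey_mean + survey_disagreement * z

print(aligned_draws.mean(), aligned_draws.std())  # ~0.5 and ~3.0 by construction
```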

3.
J R Stat Soc Ser C Appl Stat ; 69(5): 1269-1283, 2020 Nov.
Article in English | MEDLINE | ID: covidwho-20235905

ABSTRACT

When testing for a rare disease, prevalence estimates can be highly sensitive to uncertainty in the specificity and sensitivity of the test. Bayesian inference is a natural way to propagate these uncertainties, with hierarchical modelling capturing variation in these parameters across experiments. Another concern is that the people in the sample may not be representative of the general population. Statistical adjustment cannot, without strong assumptions, correct for selection bias in an opt-in sample, but multilevel regression and post-stratification can at least adjust for known differences between the sample and the population. We demonstrate hierarchical regression and post-stratification models with code in Stan and discuss their application to a controversial recent study of SARS-CoV-2 antibodies in a sample of people from the Stanford University area. Wide posterior intervals make it impossible to evaluate the quantitative claims of that study regarding the number of unreported infections. For future studies, the methods described here should facilitate more accurate estimates of disease prevalence from imperfect tests performed on non-representative samples.
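The paper's models are written in Stan; as a minimal stand-in sketch in Python, the snippet below approximates the posterior for prevalence from an imperfect test on a grid, propagating sensitivity and specificity uncertainty by Monte Carlo. The counts and Beta parameters are illustrative placeholders, and averaging conditional posteriors over the sensitivity/specificity draws is only an approximation to full joint inference.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y_pos, n = 50, 3330                                   # positives out of tested (illustrative)
sens_draws = rng.beta(103 + 1, 122 - 103 + 1, 2000)   # sensitivity from assumed calibration counts
spec_draws = rng.beta(399 + 1, 401 - 399 + 1, 2000)   # specificity from assumed calibration counts

grid = np.linspace(0, 0.05, 501)                      # prevalence grid
post = np.zeros_like(grid)
for se, sp in zip(sens_draws, spec_draws):
    p_obs = grid * se + (1 - grid) * (1 - sp)         # P(test positive | prevalence, se, sp)
    like = stats.binom.pmf(y_pos, n, p_obs)
    post += like / like.sum()                         # flat prior on prevalence
post /= post.sum()

print(f"posterior mean prevalence ~ {(grid * post).sum():.4f}")
```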

4.
Annals of Applied Statistics ; 17(2):1353-1374, 2023.
Article in English | Web of Science | ID: covidwho-20230860

ABSTRACT

Estimating the true mortality burden of COVID-19 for every country in the world is a difficult, but crucial, public health endeavor. Attributing deaths, direct or indirect, to COVID-19 is problematic. A more attainable target is the "excess deaths," the number of deaths in a particular period relative to that expected during "normal times," and we develop a model for this endeavor. Excess mortality requires two numbers, the total deaths and the expected deaths, but the former is unavailable for many countries, so modeling is required for those countries. The expected deaths are based on historic data, and we develop a model for producing estimates of these deaths for all countries. We allow for uncertainty in the modeled expected numbers when calculating the excess. The methods we describe were used to produce the World Health Organization (WHO) excess death estimates. To achieve both interpretability and transparency, we developed a relatively simple overdispersed Poisson count framework within which the various data types can be modeled. We use data from countries with national monthly data to build a predictive log-linear regression model with time-varying coefficients for countries without data. For a number of countries, only subnational data are available, and we construct a multinomial model for such data, based on the assumption that the fractions of deaths in subregions remain approximately constant over time. Our inferential approach is Bayesian, with the covariate predictive model being implemented in the fast and accurate INLA software. The subnational modeling was carried out using MCMC in Stan. Based on our modeling, the point estimate for global excess mortality during 2020-2021 is 14.8 million, with a 95% credible interval of (13.2, 16.6) million.
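A toy sketch of the excess-mortality bookkeeping (observed minus expected), with the expected count coming from a log-linear trend fitted to pre-pandemic years and overdispersion handled by negative-binomial simulation. The WHO models described above are far richer (covariates, INLA, subnational multinomial components); the counts and dispersion value below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(2015, 2020)
deaths = np.array([52_000, 52_800, 53_500, 54_100, 55_000])   # hypothetical historic totals
obs_2020 = 63_500                                             # hypothetical pandemic-year total

# Fit a log-linear trend and project the expected count for 2020.
b1, b0 = np.polyfit(years, np.log(deaths), deg=1)
expected_2020 = np.exp(b0 + b1 * 2020)

# Propagate uncertainty in the expected count with a negative binomial
# (mean mu, overdispersion phi), then form the excess distribution.
mu, phi = expected_2020, 50.0                                  # phi assumed
p = phi / (phi + mu)
expected_draws = rng.negative_binomial(phi, p, size=10_000)
excess_draws = obs_2020 - expected_draws

lo, hi = np.percentile(excess_draws, [2.5, 97.5])
print(f"excess ~ {excess_draws.mean():.0f} (95% interval {lo:.0f} to {hi:.0f})")
```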

5.
AIMS Mathematics ; 8(7):16790-16824, 2023.
Article in English | Scopus | ID: covidwho-2324418

ABSTRACT

Wastewater sampling for the detection and monitoring of SARS-CoV-2 has been developed and applied at an unprecedented pace; however, uncertainty remains when interpreting the measured viral RNA signals and their spatiotemporal variation. The proliferation of measurements that are below a quantifiable threshold, usually during non-endemic periods, poses a further challenge to interpretation and time-series analysis of the data. Inspired by research using a custom Kalman smoother model to estimate the true level of SARS-CoV-2 RNA concentrations in wastewater, we propose an alternative left-censored dynamic linear model. Cross-validation of both models alongside a simple moving average, using data from 286 sewage treatment works across England, allows for a comprehensive validation of the proposed approach. The presented dynamic linear model is more parsimonious, has a faster computational time, and offers a more flexible modelling framework than the equivalent Kalman smoother. Furthermore, we show how the use of wastewater data, transformed by such models, correlates more closely with regional case rate positivity as published by the Office for National Statistics (ONS) Coronavirus (COVID-19) Infection Survey. The modelled output is more robust and is therefore capable of better complementing traditional surveillance than untransformed data or a simple moving average, providing additional confidence and utility for public health decision making. © 2023, American Institute of Mathematical Sciences. All rights reserved.
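To give a flavour of the state-space setup, here is a minimal local-level Kalman filter sketch for log viral concentrations, with a crude handling of left-censored values (observations below the limit of quantification are treated as missing, so only the prediction step runs). The paper's left-censored dynamic linear model handles censoring properly through the likelihood; the variances and data below are assumptions of this example.

```python
import numpy as np

def local_level_filter(y, loq, obs_var=0.5, state_var=0.1):
    n = len(y)
    m, P = np.zeros(n), np.zeros(n)           # filtered mean and variance
    m_prev, P_prev = 0.0, 10.0                # diffuse-ish initial state
    for t in range(n):
        # predict
        m_pred, P_pred = m_prev, P_prev + state_var
        if np.isnan(y[t]) or y[t] < loq:      # censored or missing: skip the measurement update
            m[t], P[t] = m_pred, P_pred
        else:                                 # standard Kalman update
            K = P_pred / (P_pred + obs_var)
            m[t] = m_pred + K * (y[t] - m_pred)
            P[t] = (1 - K) * P_pred
        m_prev, P_prev = m[t], P[t]
    return m, P

y = np.array([2.1, 1.8, np.nan, 0.4, 0.2, 1.5, 2.6])  # log10 gene copies, illustrative
filtered_mean, _ = local_level_filter(y, loq=0.5)
print(filtered_mean.round(2))
```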

6.
Statistics in Biopharmaceutical Research ; : 1-33, 2023.
Article in English | Academic Search Complete | ID: covidwho-2313330

ABSTRACT

While the SARS-CoV-2 (COVID-19) pandemic has led to an impressive and unprecedented initiation of clinical research, it has also led to considerable disruption of clinical trials in other disease areas, with around 80% of non-COVID-19 trials stopped or interrupted during the pandemic. In many cases the disrupted trials will not have the planned statistical power necessary to yield interpretable results. This paper describes methods to compensate for the information loss arising from trial disruptions by incorporating additional information available from auxiliary data sources. The methods described include the use of auxiliary baseline and early outcome data available from the trial itself, and frequentist and Bayesian approaches for the incorporation of information from external data sources. The methods are illustrated by application to the analysis of artificial data based on the Primary care pediatrics Learning Activity Nutrition (PLAN) study, a clinical trial assessing a diet and exercise intervention for overweight children, which was affected by the COVID-19 pandemic. We show how all of the proposed methods lead to an increase in precision relative to the use of complete case data only. [ABSTRACT FROM AUTHOR] Copyright of Statistics in Biopharmaceutical Research is the property of Taylor & Francis Ltd.

7.
Spat Spatiotemporal Epidemiol ; 45: 100588, 2023 Jun.
Article in English | MEDLINE | ID: covidwho-2314026

ABSTRACT

To monitor the COVID-19 epidemic in Cuba, data on several epidemiological indicators have been collected on a daily basis for each municipality. Studying the spatio-temporal dynamics of these indicators, and how they behave similarly, can help us better understand how COVID-19 spread across Cuba. Therefore, spatio-temporal models can be used to analyze these indicators. Univariate spatio-temporal models have been thoroughly studied, but when interest lies in studying the association between multiple outcomes, a joint model that allows for association between the spatial and temporal patterns is necessary. The purpose of our study was to develop a multivariate spatio-temporal model to study the association between the weekly number of COVID-19 deaths and the weekly number of imported COVID-19 cases in Cuba during 2021. To allow for correlation between the spatial patterns, a multivariate conditional autoregressive (MCAR) prior was used. Correlation between the temporal patterns was taken into account using one of two approaches: either a multivariate random walk prior or an MCAR prior. All models were fitted within a Bayesian framework.


Subject(s)
COVID-19 , Humans , Spatio-Temporal Analysis , Incidence , Bayes Theorem , Cuba/epidemiology
8.
Microbiol Spectr ; 11(3): e0534622, 2023 Jun 15.
Article in English | MEDLINE | ID: covidwho-2317870

ABSTRACT

The first 18 months of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections in Colombia were characterized by three epidemic waves. During the third wave, from March through August 2021, intervariant competition resulted in Mu replacing Alpha and Gamma. We employed Bayesian phylodynamic inference and epidemiological modeling to characterize the variants in the country during this period of competition. Phylogeographic analysis indicated that Mu did not emerge in Colombia but acquired increased fitness there through local transmission and diversification, contributing to its export to North America and Europe. Despite not having the highest transmissibility, Mu's genetic composition and ability to evade preexisting immunity facilitated its domination of the Colombian epidemic landscape. Our results support previous modeling studies demonstrating that both intrinsic factors (transmissibility and genetic diversity) and extrinsic factors (time of introduction and acquired immunity) influence the outcome of intervariant competition. This analysis will help set practical expectations about the inevitable emergence of new variants and their trajectories. IMPORTANCE Before the appearance of the Omicron variant in late 2021, numerous SARS-CoV-2 variants emerged, were established, and declined, often with different outcomes in different geographic areas. In this study, we considered the trajectory of the Mu variant, which only successfully dominated the epidemic landscape of a single country: Colombia. We demonstrate that Mu competed successfully there due to its early and opportune introduction time in late 2020, combined with its ability to evade immunity granted by prior infection or the first generation of vaccines. Mu likely did not effectively spread outside of Colombia because other immune-evading variants, such as Delta, had arrived in those locales and established themselves first. On the other hand, Mu's early spread within Colombia may have prevented the successful establishment of Delta there. Our analysis highlights the geographic heterogeneity of early SARS-CoV-2 variant spread and helps to reframe the expectations for the competition behaviors of future variants.


Subject(s)
COVID-19 , Humans , Bayes Theorem , COVID-19/epidemiology , Colombia/epidemiology , SARS-CoV-2/genetics
9.
Cogn Sci ; 47(5): e13294, 2023 05.
Article in English | MEDLINE | ID: covidwho-2316745

ABSTRACT

People are known for good predictions in domains they have rich experience with, such as everyday statistics and intuitive physics. But how well can they predict for problems they lack experience with, such as the duration of an ongoing epidemic caused by a new virus? Amid the first wave of COVID-19 in China, we conducted an online diary study, asking each of over 400 participants to predict the remaining duration of the epidemic, once per day for 14 days. Participants' predictions reflected a reasonable use of publicly available information but were nonetheless biased, subject to the influence of negative affect and future time perspectives. Computational modeling revealed that participants relied neither on prior distributions of epidemic durations, as in inferring everyday statistics, nor on mechanistic simulations of epidemic dynamics, as in computing intuitive physics. Instead, with minimal experience, participants' predictions were best explained by similarity-based generalization of the temporal pattern of epidemic statistics. In two control experiments, we further confirmed that such a cognitive algorithm is not specific to the epidemic scenario and that minimal and rich experience do lead to different prediction behaviors for the same observations. We conclude that people generalize patterns in recent history to predict the future under minimal experience.


Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , Generalization, Psychological , Computer Simulation , China/epidemiology
10.
Environmetrics ; 2023.
Article in English | Web of Science | ID: covidwho-2310887

ABSTRACT

Hawkes processes are very popular mathematical tools for modeling phenomena exhibiting self-exciting or self-correcting behavior. Typical examples are earthquake occurrence, wildfires, droughts, capture-recapture, violent crime, trade exchange, and social network activity. The widespread use of Hawkes processes in different fields calls for fast, reproducible, reliable, easy-to-code techniques to implement such models. We offer a technique to perform approximate Bayesian inference of Hawkes process parameters based on the use of the R-package inlabru. The inlabru R-package, in turn, relies on the INLA methodology to approximate the posterior of the parameters. Our Hawkes process approximation is based on a decomposition of the log-likelihood into three parts, which are linearly approximated separately. The linear approximation is performed with respect to the mode of the parameters' posterior distribution, which is determined with an iterative gradient-based method. The approximation of the posterior parameters is therefore deterministic, ensuring full reproducibility of the results. The proposed technique only requires the user to provide the functions to calculate the different parts of the decomposed likelihood, which are internally linearly approximated by the R-package inlabru. We provide a comparison with the bayesianETAS R-package, which is based on an MCMC method. The two techniques provide similar results, but our approach requires two to ten times less computational time to converge, depending on the amount of data.
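For readers unfamiliar with the likelihood being decomposed, here is a compact sketch of the exponential-kernel Hawkes log-likelihood in Python (rather than R/inlabru, and without the authors' three-part linearisation). The parametrisation lambda(t) = mu + alpha * sum over t_i < t of beta * exp(-beta * (t - t_i)) and all numerical values are assumptions of the example.

```python
import numpy as np

def hawkes_loglik(times, mu, alpha, beta, T):
    """Exact log-likelihood of an exponential-kernel Hawkes process on [0, T]."""
    times = np.asarray(times)
    n = len(times)
    A = np.zeros(n)                          # recursive sum of decayed past excitations
    for i in range(1, n):
        A[i] = np.exp(-beta * (times[i] - times[i - 1])) * (1.0 + A[i - 1])
    log_intensity = np.log(mu + alpha * beta * A)
    # Integrated intensity (compensator) over [0, T].
    compensator = mu * T + alpha * np.sum(1.0 - np.exp(-beta * (T - times)))
    return log_intensity.sum() - compensator

events = np.sort(np.random.default_rng(3).uniform(0, 100, size=60))  # toy event times
print(hawkes_loglik(events, mu=0.4, alpha=0.5, beta=1.0, T=100.0))
```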

11.
Applied Economics ; 2023.
Article in English | Scopus | ID: covidwho-2289253

ABSTRACT

As the coronavirus pandemic of 2020 has shown, having sufficient hospital capacity is of great importance. Ideally, this capacity should be provided by efficient units, calling for measurement of their performance. However, the standard cost frontier model yields biased efficiency scores because it ignores (often unobserved) heterogeneity between hospitals. In this paper, efficiency scores are derived from a cost function with both random intercept and random slope parameters, which overcomes the problem of unobserved heterogeneity in stochastic frontier analysis. Based on an unbalanced panel covering the years 2004 to 2007 and comprising at least 100 Swiss hospitals per year, Bayesian inference points to significant heterogeneity, suggesting rejection of the standard cost frontier model. When unobserved heterogeneity is fully accounted for, average estimated inefficiency decreases to 5%, below the values of 14% and 21% reported for a number of European and Middle-Eastern countries, respectively (Hollingsworth, 2008; Alawi et al., 2019). Moreover, hospitals rated below 85% efficiency according to the standard model gain up to 12 percentage points. They can provide much needed capacity that otherwise would be discarded on the grounds that they are not sufficiently efficient providers. © 2023 Informa UK Limited, trading as Taylor & Francis Group.

12.
2022 Winter Simulation Conference, WSC 2022 ; 2022-December:496-507, 2022.
Article in English | Scopus | ID: covidwho-2285192

ABSTRACT

COVID-19 related crimes like counterfeit Personal Protective Equipment (PPE) involve complex supply chains with partly unobservable behavior and sparse data, making it challenging to construct a reliable simulation model. Model calibration can help with this, as it is the process of tuning and estimating the model parameters with observed data of the system. A subset of model calibration techniques seems able to deal with sparse data in other fields: Genetic Algorithms and Bayesian Inference. However, it is unknown how accurately these techniques calibrate simulation models when the data are sparse. This research analyzes the quality-of-fit of these two model calibration techniques for a counterfeit PPE simulation model given an increasing degree of data sparseness. The results demonstrate that these techniques are suitable for calibrating a linear supply chain model with randomly missing values. Further research should focus on other techniques, a larger set of models, and structural uncertainty. © 2022 IEEE.
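A bare-bones rejection-ABC sketch of the calibration idea: draw candidate parameters from a prior, simulate the model, and keep candidates whose simulated output is close to the (sparse) observed data. The paper compares richer variants (genetic algorithms, full Bayesian calibration) on a counterfeit-PPE supply chain model; the toy "model", tolerance, and missingness rate below are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_chain(theta, n_weeks=20):
    """Hypothetical linear supply-chain throughput with noise."""
    return theta * np.arange(1, n_weeks + 1) + rng.normal(0, 2, n_weeks)

observed = simulate_chain(3.0)
observed[rng.random(observed.size) < 0.4] = np.nan        # randomly missing values (sparse data)

def distance(sim, obs):
    mask = ~np.isnan(obs)                                  # compare only where data exist
    return np.sqrt(np.mean((sim[mask] - obs[mask]) ** 2))

prior_draws = rng.uniform(0, 10, size=20_000)              # prior over the unknown rate
accepted = [th for th in prior_draws if distance(simulate_chain(th), observed) < 5.0]
print(len(accepted), np.mean(accepted))                    # approximate posterior for theta
```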

13.
Extreme Mech Lett ; 40: 100921, 2020 Oct.
Article in English | MEDLINE | ID: covidwho-2267379

ABSTRACT

Understanding the outbreak dynamics of COVID-19 through the lens of mathematical models is an elusive but significant goal. Within only half a year, the COVID-19 pandemic has resulted in more than 19 million reported cases across 188 countries with more than 700,000 deaths worldwide. Unlike any other disease in history, COVID-19 has generated an unprecedented volume of data, well documented, continuously updated, and broadly available to the general public. Yet, the precise role of mathematical modeling in providing quantitative insight into the COVID-19 pandemic remains a topic of ongoing debate. Here we discuss the lessons learned from six months of modeling COVID-19. We highlight the early success of classical models for infectious diseases and show why these models fail to predict the current outbreak dynamics of COVID-19. We illustrate how data-driven modeling can integrate classical epidemiology modeling and machine learning to infer critical disease parameters, in real time, from reported case data to make informed predictions and guide political decision making. We critically discuss questions that these models can and cannot answer and showcase controversial decisions around the early outbreak dynamics, outbreak control, and exit strategies. We anticipate that this summary will stimulate discussion within the modeling community and help provide guidelines for robust mathematical models to understand and manage the COVID-19 pandemic. EML webinar speakers, videos, and overviews are updated at https://imechanica.org/node/24098.

14.
BMC Public Health ; 23(1): 359, 2023 02 17.
Article in English | MEDLINE | ID: covidwho-2250108

ABSTRACT

BACKGROUND: The spread of COVID-19 (SARS-CoV-2) and the surging number of cases across the United States have resulted in full hospitals and exhausted health care workers. Limited availability and questionable reliability of the data make outbreak prediction and resource planning difficult, and any estimates or forecasts of such quantities are subject to high uncertainty and low accuracy. The aim of this study is to apply, automate, and assess a Bayesian time series model for the real-time estimation and forecasting of COVID-19 cases and the number of hospitalizations in Wisconsin healthcare emergency readiness coalition (HERC) regions. METHODS: This study makes use of the publicly available Wisconsin COVID-19 historical data by county. Cases and the effective time-varying reproduction number (Rt) by HERC region over time are estimated using Bayesian latent variable models. Hospitalizations are estimated by HERC region over time using a Bayesian regression model. Cases, effective Rt, and hospitalizations are forecasted over 1-day, 3-day, and 7-day time horizons using the last 28 days of data, and the 20%, 50%, and 90% Bayesian credible intervals of the forecasts are calculated. The frequentist coverage probability is compared to the Bayesian credible level to evaluate performance. RESULTS: For cases and effective Rt, all three time horizons outperform the three credible levels of the forecast. For hospitalizations, all three time horizons outperform the 20% and 50% credible intervals of the forecast. On the contrary, the 1-day and 3-day periods underperform the 90% credible intervals. Uncertainty quantification should therefore be re-assessed using the frequentist coverage probability of the Bayesian credible interval, based on observed data, for all three metrics. CONCLUSIONS: We present an approach to automate the real-time estimation and forecasting of cases and hospitalizations, and the corresponding uncertainty, using publicly available data. The models were able to infer short-term trends consistent with reported values at the HERC region level. Additionally, the models were able to accurately forecast and estimate the uncertainty of the measurements. This study can help identify the most affected regions and major outbreaks in the near future. The workflow can be adapted to other geographic regions, states, and even countries where decision-making processes are supported in real time by the proposed modeling system.
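A small sketch of the evaluation step described above: comparing the empirical (frequentist) coverage of forecast credible intervals against their nominal level, using later-observed values. All numbers are invented; the toy intervals are deliberately too narrow, so the printed coverage falls short of the nominal 90%.

```python
import numpy as np

def empirical_coverage(lower, upper, observed):
    """Fraction of observations that fall inside their forecast interval."""
    inside = (observed >= lower) & (observed <= upper)
    return inside.mean()

rng = np.random.default_rng(5)
truth = rng.poisson(200, size=90)                      # later-observed daily counts (toy)
forecast_mean = truth + rng.normal(0, 20, size=90)     # toy point forecasts with error sd 20
half_width = 1.645 * 15                                # nominal 90% half-width, assuming sd 15
lower, upper = forecast_mean - half_width, forecast_mean + half_width

print(f"empirical coverage = {empirical_coverage(lower, upper, truth):.2f} vs nominal 0.90")
```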


Subject(s)
COVID-19 , Humans , United States , COVID-19/epidemiology , SARS-CoV-2 , Public Health , Bayes Theorem , Wisconsin/epidemiology , Reproducibility of Results , Forecasting , Uncertainty , Hospitalization
15.
BMC Med Res Methodol ; 23(1): 24, 2023 01 25.
Article in English | MEDLINE | ID: covidwho-2248831

ABSTRACT

BACKGROUND: One of the main challenges of the COVID-19 pandemic is to make sense of available, but often heterogeneous and noisy, data. This contribution presents a data-driven methodology that allows exploring the hospitalization dynamics of COVID-19, exemplified with a study of 17 autonomous regions in Spain from summer 2020 to summer 2021. METHODS: We use data on new daily cases and hospitalizations reported by the Spanish Ministry of Health to implement a Bayesian inference method that allows making short-term predictions of bed occupancy of COVID-19 patients in each of the autonomous regions of the country. RESULTS: We show how to use the temporal series for the number of daily admissions and discharges from hospital to reproduce the hospitalization dynamics of COVID-19 patients. For the case study of the region of Aragon, we estimate that the probability of being admitted to hospital care upon infection is 0.090 [0.086-0.094] (95% C.I.), with the distribution governing hospital admission yielding a median interval of 3.5 days and an IQR of 7 days. Likewise, the distribution of the length of stay produces estimates of 12 days for the median and 10 days for the IQR. A comparison of model parameters across the regions analyzed allows us to detect differences and changes in the policies of the health authorities. CONCLUSIONS: We observe important regional differences, signaling that to properly compare very different populations, it is paramount to acknowledge all the diversity in terms of culture, socio-economic status, and resource availability. To better understand the impact of this pandemic, much more data, disaggregated and properly annotated, should be made available.
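A schematic of the occupancy bookkeeping underlying such models: expected beds occupied on day t follow from convolving daily admissions with the length-of-stay survival function. The admission counts and the crude exponential-decay stay distribution below are assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(6)
days = 120
admissions = rng.poisson(30, size=days)                 # daily hospital admissions (toy)

# Survival function of length of stay: P(still in hospital d days after admission).
max_stay = 60
stay_probs = np.exp(-np.arange(max_stay) / 12.0)        # crude decay, mean stay ~12 days (assumed)

# Expected occupancy: convolution of admissions with the stay survival function.
occupancy = np.convolve(admissions, stay_probs)[:days]
print(occupancy[-7:].round(0))                          # expected beds occupied, last week
```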


Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , SARS-CoV-2 , Spain/epidemiology , Pandemics , Bayes Theorem , Hospitalization
16.
Epidemiol Infect ; 151: e5, 2022 12 16.
Article in English | MEDLINE | ID: covidwho-2243074

ABSTRACT

Quantitative information on epidemiological quantities such as the incubation period and generation time of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) variants is scarce. We analysed a dataset collected during contact tracing activities in the province of Reggio Emilia, Italy, throughout 2021. We determined the distributions of the incubation period for the Alpha and Delta variants using information on negative polymerase chain reaction tests and the date of last exposure from 282 symptomatic cases. We estimated the distributions of the intrinsic generation time using a Bayesian inference approach applied to 9724 SARS-CoV-2 cases clustered in 3545 households where at least one secondary case was recorded. We estimated a mean incubation period of 4.9 days (95% credible intervals, CrI, 4.4-5.4) for Alpha and 4.5 days (95% CrI 4.0-5.0) for Delta. The intrinsic generation time was estimated to have a mean of 7.12 days (95% CrI 6.27-8.44) for Alpha and of 6.52 days (95% CrI 5.54-8.43) for Delta. The household serial interval was 2.43 days (95% CrI 2.29-2.58) for Alpha and 2.74 days (95% CrI 2.62-2.88) for Delta, and the estimated proportion of pre-symptomatic transmission was 48-51% for both variants. These results indicate limited differences in the incubation period and intrinsic generation time of SARS-CoV-2 variants Alpha and Delta compared to ancestral lineages.
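A small sketch of one ingredient of such analyses: fitting a gamma distribution to exactly observed incubation periods by maximum likelihood. The paper's actual estimates use interval-censored exposure data and a Bayesian approach, which this toy MLE on synthetic data does not reproduce.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic incubation periods in days; sample size matches the 282 symptomatic cases above.
incubation_days = rng.gamma(shape=4.0, scale=1.2, size=282)

shape, loc, scale = stats.gamma.fit(incubation_days, floc=0)   # fix the location at zero
print(f"estimated mean incubation ~ {shape * scale:.1f} days")
```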


Subject(s)
COVID-19 , SARS-CoV-2 , Humans , SARS-CoV-2/genetics , COVID-19/epidemiology , Contact Tracing , Bayes Theorem , Infectious Disease Incubation Period
17.
Journal of Applied Statistics ; 2023.
Article in English | Scopus | ID: covidwho-2235844

ABSTRACT

Considering the context of functional data analysis, we developed and applied a new Bayesian approach, via the Gibbs sampler, to select basis functions for a finite representation of functional data. The proposed methodology uses Bernoulli latent variables to assign zero to some of the basis function coefficients with a positive probability. This procedure allows for an adaptive basis selection, since it can determine the number of bases and which ones should be selected to represent functional data. Moreover, the proposed procedure measures the uncertainty of the selection process and can be applied to multiple curves simultaneously. The methodology can deal with observed curves that may differ due to experimental error and random individual differences between subjects, as observed in a real-data application involving daily numbers of COVID-19 cases in Brazil. Simulation studies show the main properties of the proposed method, such as its accuracy in estimating the coefficients and its ability to find the true set of basis functions. Although the proposed model was developed in the context of functional data analysis, we also compared it, via simulation, with the well-established LASSO and Bayesian LASSO, which are methods developed for non-functional data. © 2023 Informa UK Limited, trading as Taylor & Francis Group.

18.
IEEE Transactions on Parallel and Distributed Systems ; 2023.
Article in English | Scopus | ID: covidwho-2232135

ABSTRACT

Simulation-based Inference (SBI) is a widely used set of algorithms to learn the parameters of complex scientific simulation models. While primarily run on CPUs in High-Performance Compute clusters, these algorithms have been shown to scale in performance when developed to be run on massively parallel architectures such as GPUs. While parallelizing existing SBI algorithms provides us with performance gains, this might not be the most efficient way to utilize the achieved parallelism. This work proposes a new parallelism-aware adaptation of an existing SBI method, namely approximate Bayesian computation with Sequential Monte Carlo (ABC-SMC). This new adaptation is designed to utilize the parallelism not only for performance gain, but also toward qualitative benefits in the learnt parameters. The key idea is to replace the notion of a single 'step-size' hyperparameter, which governs how the state space of parameters is explored during learning, with step-sizes sampled from a tuned Beta distribution. This allows this new ABC-SMC algorithm to more efficiently explore the state-space of the parameters being learned. We test the effectiveness of the proposed algorithm to learn parameters for an epidemiology model running on a Tesla T4 GPU. Compared to the parallelized state-of-the-art SBI algorithm, we get similar quality results in ~100x fewer simulations and observe ~80x lower run-to-run variance across 10 independent trials.
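A fragmentary sketch of the key idea as stated in the abstract: instead of one fixed step-size hyperparameter, each particle's perturbation scale is drawn from a tuned Beta distribution, so the population explores the parameter space at mixed granularities. The variable names, scaling, and tuning values below are illustrative, not the authors' GPU implementation.

```python
import numpy as np

rng = np.random.default_rng(8)
n_particles = 1_000
particles = rng.normal(0.3, 0.05, size=n_particles)           # current ABC-SMC population (toy)

# Single step size (classical ABC-SMC perturbation)...
fixed_step = 0.02
perturbed_fixed = particles + rng.normal(0, fixed_step, size=n_particles)

# ...versus per-particle step sizes sampled from a tuned Beta distribution.
beta_a, beta_b = 2.0, 8.0                                      # tuned shape parameters (assumed)
step_sizes = 0.1 * rng.beta(beta_a, beta_b, size=n_particles)  # scaled to the parameter's range
perturbed_beta = particles + rng.normal(0, 1, size=n_particles) * step_sizes

print(perturbed_fixed.std(), perturbed_beta.std())
```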

19.
Virtual Meeting of the Mexican Statistical Association, AME 2020 and 34FNE meeting, 2021 ; 397:115-129, 2022.
Article in English | Scopus | ID: covidwho-2173619

ABSTRACT

The COVID-19 pandemic has been one of the most significant health problems in the world. Several academic studies focus on its health consequences. However, only a few studies have paid attention to analyzing government strategies as general public guidelines. In this work, we use a Susceptible-Infected-Recovered-Deceased (SIRD) model to assess the infection rate, the mortality rate, and the effects of the Mexican government's intervention during the three waves of the COVID-19 pandemic. We carry out Bayesian inference on the proposed model using a Robust Adaptive Metropolis (RAM) algorithm. Essentially, the proposed methodology makes it possible to appreciate the effects of quarantine on the mortality rate across the three pandemic waves. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
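A minimal SIRD sketch, using scipy ODE integration, of the model class named above; the Bayesian estimation of the rates with a Robust Adaptive Metropolis sampler is not shown. The rate values and initial conditions are illustrative assumptions, not the paper's estimates.

```python
import numpy as np
from scipy.integrate import odeint

def sird(y, t, beta, gamma, mu):
    """SIRD compartmental dynamics."""
    S, I, R, D = y
    N = S + I + R + D
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I - mu * I
    dR = gamma * I
    dD = mu * I
    return [dS, dI, dR, dD]

N0 = 126e6                                   # roughly the population of Mexico
y0 = [N0 - 100, 100, 0, 0]                   # 100 initial infections (toy)
t = np.linspace(0, 365, 366)
beta, gamma, mu = 0.25, 0.1, 0.01            # transmission, recovery, death rates (assumed)

S, I, R, D = odeint(sird, y0, t, args=(beta, gamma, mu)).T
print(f"deaths after one year (toy run): {D[-1]:,.0f}")
```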

20.
Journal of Econometrics ; 2023.
Article in English | ScienceDirect | ID: covidwho-2165527

ABSTRACT

A flexible predictive density combination is introduced for large financial data sets which allows for model set incompleteness. Dimension reduction procedures that include learning allocate the large sets of predictive densities and combination weights to relatively small subsets. Given the representation of the probability model in extended nonlinear state-space form, efficient simulation-based Bayesian inference is proposed using parallel dynamic clustering as well as nonlinear filtering, implemented on graphics processing units. The approach is applied to combine predictive densities based on a large number of individual US stock returns of daily observations over a period that includes the Covid-19 crisis period. Evidence on dynamic cluster composition, weight patterns and model set incompleteness gives valuable signals for improved modelling. This enables higher predictive accuracy and better assessment of uncertainty and risk for investment fund management.
